Project: Identify Customer Segments

In this project, you will apply unsupervised learning techniques to identify segments of the population that form the core customer base for a mail-order sales company in Germany. These segments can then be used to direct marketing campaigns towards audiences that will have the highest expected rate of returns. The data that you will use has been provided by our partners at Bertelsmann Arvato Analytics, and represents a real-life data science task.

This notebook will help you complete this task by providing a framework within which you will perform your analysis steps. In each step of the project, you will see some text describing the subtask that you will perform, followed by one or more code cells for you to complete your work. Feel free to add additional code and markdown cells as you go along so that you can explore everything in manageable chunks. The code cells provided in the base template outline only the major tasks, and will usually not be enough to cover all of the minor tasks that comprise them.

It should be noted that while there will be precise guidelines on how you should handle certain tasks in the project, there will also be places where an exact specification is not provided. There will be times in the project where you will need to make and justify your own decisions on how to treat the data. These are places where there may not be only one way to handle the data. In real-life tasks, there may be many valid ways to approach an analysis task. One of the most important things you can do is clearly document your approach so that other scientists can understand the decisions you've made.

At the end of most sections, there will be a Markdown cell labeled Discussion. In these cells, you will report your findings for the completed section, as well as document the decisions that you made in your approach to each subtask. Your project will be evaluated not just on the code used to complete the tasks outlined, but also your communication about your observations and conclusions at each stage.

Step 0: Load the Data

There are four files associated with this project (not including this one):

Each row of the demographics files represents a single person, but also includes information outside of individuals, including information about their household, building, and neighborhood. You will use this information to cluster the general population into groups with similar demographic properties. Then, you will see how the people in the customers dataset fit into those created clusters. The hope here is that certain clusters are over-represented in the customers data, as compared to the general population; those over-represented clusters will be assumed to be part of the core userbase. This information can then be used for further applications, such as targeting for a marketing campaign.

To start off with, load in the demographics data for the general population into a pandas DataFrame, and do the same for the feature attributes summary. Note for all of the .csv data files in this project: they're semicolon (;) delimited, so you'll need an additional argument in your read_csv() call to read in the data properly. Also, considering the size of the main dataset, it may take some time for it to load completely.
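
A minimal sketch of this loading step; the file names here are assumptions, so substitute the actual paths distributed with the project:

```python
import numpy as np
import pandas as pd

# File names are assumptions -- use the paths provided with the project.
# Both files are semicolon-delimited, hence sep=';'.
azdias = pd.read_csv('Udacity_AZDIAS_Subset.csv', sep=';')
feat_info = pd.read_csv('AZDIAS_Feature_Summary.csv', sep=';')

# A quick first look at the general structure.
print(azdias.shape)
azdias.head()
```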

Once the dataset is loaded, it's recommended that you take a little bit of time just browsing the general structure of the dataset and feature summary file. You'll be getting deep into the innards of the cleaning in the first major step of the project, so gaining some general familiarity can help you get your bearings.

Tip: Add additional cells to keep everything in reasonably-sized chunks! Keyboard shortcut esc --> a (press escape to enter command mode, then press the 'A' key) adds a new cell before the active cell, and esc --> b adds a new cell after the active cell. If you need to convert an active cell to a markdown cell, use esc --> m and to convert to a code cell, use esc --> y.

Step 1: Preprocessing

Step 1.1: Assess Missing Data

The feature summary file contains a summary of properties for each demographics data column. You will use this file to help you make cleaning decisions during this stage of the project. First of all, you should assess the demographics data in terms of missing data. Pay attention to the following points as you perform your analysis, and take notes on what you observe. Make sure that you fill in the Discussion cell with your findings and decisions at the end of each step that has one!

Step 1.1.1: Convert Missing Value Codes to NaNs

The fourth column of the feature attributes summary (loaded in above as feat_info) documents the codes from the data dictionary that indicate missing or unknown data. While the file encodes this as a list (e.g. [-1,0]), this will get read in as a string object. You'll need to do a little bit of parsing to make use of it to identify and clean the data. Convert data that matches a 'missing' or 'unknown' value code into a numpy NaN value. You might want to see how much data takes on a 'missing' or 'unknown' code, and how much data is naturally missing, as a point of interest.

As one more reminder, you are encouraged to add additional cells to break up your analysis into manageable chunks.
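
A sketch of one way to do this parsing, assuming feat_info has columns named attribute and missing_or_unknown (the fourth column), with codes stored as strings like '[-1,0]' or '[-1,X,XX]':

```python
import numpy as np

def parse_codes(code_str):
    """Parse a string like '[-1,0]' or '[-1,X,XX]' into a list of codes."""
    inner = code_str.strip('[]')
    if not inner:
        return []
    codes = []
    for token in inner.split(','):
        try:
            codes.append(int(token))   # numeric codes, e.g. -1 or 0
        except ValueError:
            codes.append(token)        # non-numeric codes, e.g. 'X' or 'XX'
    return codes

# Replace every coded missing/unknown value with NaN, column by column.
for attr, codes in zip(feat_info['attribute'],
                       feat_info['missing_or_unknown'].map(parse_codes)):
    if codes and attr in azdias.columns:
        azdias[attr] = azdias[attr].replace(codes, np.nan)
```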

Step 1.1.2: Assess Missing Data in Each Column

How much missing data is present in each column? There are a few columns that are outliers in terms of the proportion of values that are missing. You will want to use matplotlib's hist() function to visualize the distribution of missing value counts to find these columns. Identify and document these columns. While some of these columns might have justifications for keeping or re-encoding the data, for this project you should just remove them from the dataframe. (Feel free to make remarks about these outlier columns in the discussion, however!)

For the remaining features, are there any patterns in which columns have, or share, missing data?
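
A sketch of the column-level assessment; the 200,000-count cutoff and the final drop list reflect the decisions reached in the discussion below:

```python
import matplotlib.pyplot as plt

# Count missing values per column and visualize the distribution.
missing_per_col = azdias.isnull().sum()
plt.hist(missing_per_col, bins=40)
plt.xlabel('Missing values per column')
plt.ylabel('Number of columns')
plt.show()

# Candidate outliers at the 200,000-count cutoff discussed below.
print(missing_per_col[missing_per_col > 200000].sort_values(ascending=False))

# Final drop list per the discussion below (KK_KUNDENTYP is kept for now).
drop_cols = ['AGER_TYP', 'TITEL_KZ', 'KBA05_BAUMAX']
azdias = azdias.drop(columns=drop_cols)
```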

As expected, the 6 outlier columns with more than 20% missing values are the very same ones as those with more than 200,000 missing values.

I will define the outlier columns as having more than 200,000 missing values.

As an alternative option, I also plotted a histogram including only the columns with fewer than 200,000 missing values (the almost-normal distribution right above this cell), and will apply the empirical rule to this trimmed set, treating any column more than three standard deviations above its mean as an outlier.

Discussion 1.1.2: Assess Missing Data in Each Column

None of the features that could be one-hot encoded came anywhere near the outlier count cutoff. However, wherever those features contain NaNs, all of the resulting dummy variables will also be NaN, so rows in which many of these features are missing at once may not be particularly useful.

I decided to use the empirical-rule cutoff of three standard deviations above the mean number of missing values, because the gap between the empirical outliers and the columns with more than 200,000 missing values was large enough that only the empirical outliers seemed worth dropping.

The ager typology AGER_TYP, consumer pattern KK_KUNDENTYP, and academic title flag TITEL_KZ columns each contain more than 50% missing values.

I believe AGER_TYP is also somewhat redundant and overly specific, since all of its non-missing labels concern the elderly and GEBURTSJAHR is a better feature to focus on with regard to age, so I dropped AGER_TYP.

TITEL_KZ contains about 99.76% missing values, so I dropped it due to the lack of enough useful information.

KK_KUNDENTYP contains about 65.6% missing values but I decided to keep it for now because I feel it could still be potentially useful.

KBA05_BAUMAX was not flagged with this cutoff, but it was close enough that I decided to drop it as well.

Step 1.1.3: Assess Missing Data in Each Row

Now, you'll perform a similar assessment for the rows of the dataset. How much data is missing in each row? As with the columns, you should see some groups of points that have very different numbers of missing values. Divide the data into two subsets: one for data points that are above some threshold for missing values, and a second subset for points below that threshold.

In order to know what to do with the outlier rows, we should see if the distribution of data values on columns that are not missing data (or are missing very little data) are similar or different between the two groups. Select at least five of these columns and compare the distribution of values.

Depending on what you observe in your comparison, this will have implications on how you approach your conclusions later in the analysis. If the distributions of non-missing features look similar between the data with many missing values and the data with few or no missing values, then we could argue that simply dropping those points from the analysis won't present a major issue. On the other hand, if the data with many missing values looks very different from the data with few or no missing values, then we should make a note on those data as special. We'll revisit these data later on. Either way, you should continue your analysis for now using just the subset of the data with few or no missing values.
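
A sketch of the row-level split and the distribution comparison, using the 20-missing-value cutoff chosen below; the helper name is my own, and the columns compared are the ones mentioned in the discussion:

```python
import seaborn as sns

# Missing values per row, and the split at the cutoff chosen below.
missing_per_row = azdias.isnull().sum(axis=1)
azdias_few = azdias[missing_per_row <= 20].copy()
azdias_many = azdias[missing_per_row > 20].copy()

def compare_distributions(col):
    """Compare a column's value counts between the two subsets."""
    fig, axes = plt.subplots(1, 2, figsize=(10, 4))
    sns.countplot(x=azdias_few[col], ax=axes[0])
    axes[0].set_title('Few missing values')
    sns.countplot(x=azdias_many[col], ax=axes[1])
    axes[1].set_title('Many missing values')
    plt.show()

for col in ['FINANZ_VORSORGER', 'SEMIO_KRIT', 'VERS_TYP']:
    compare_distributions(col)
```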

I will define the outlier rows as having more than 20 missing values.

As an alternative, I have also set up variables that allow me to treat all values that are more than three standard deviations away from the mean as outliers (i.e. following the empirical rule).

Due to the very long right tail of the full alt_nasummary distribution, I "zoomed in" by trimming away everything with more than 10 missing values to see whether the distribution looks closer to normal. It was still fairly skewed, but a somewhat better fit for the empirical rule.

I think a reasonable cutoff for having "few missing values" would be for a row to have no more than 5 in both the manual and empirical cases. With 79 columns to a row, that would mean having no more than approximately 6.3% missing values.

I believe it is better to also keep a "few missing values" cutoff of 20,000 with this method, simply because HH_EINKOMMEN_SCORE's missing value count (18,348) is far closer to 4,854 than it is to 73,499.

Checking all columns just to be sure since there aren't a whole lot with my cutoffs:

Discussion 1.1.3: Assess Missing Data in Each Row

With the exception of around one or two columns, it seems that most rows with more missing values than the cutoff have a hugely disproportionate number of one specific value (like 3 in FINANZ_VORSORGER, 7 in SEMIO_KRIT, or 1 in VERS_TYP).

I have also decided to use the subset with few/no missing values based on the 20-NaN cutoff in order to retain as much data as is reasonably possible. I will no longer use the empirical rule.

Step 1.2: Select and Re-Encode Features

Checking for missing data isn't the only way in which you can prepare a dataset for analysis. Since the unsupervised learning techniques to be used will only work on data that is encoded numerically, you need to make a few encoding changes or additional assumptions to be able to make progress. In addition, while almost all of the values in the dataset are encoded using numbers, not all of them represent numeric values. Check the third column of the feature summary (feat_info) for a summary of types of measurement.

In the first two parts of this sub-step, you will investigate the categorical and mixed-type features and decide, for each one, whether to keep, drop, or re-encode it. Then, in the last part, you will create a new data frame with only the selected and engineered columns.

Data wrangling is often the trickiest part of the data analysis process, and there's a lot of it to be done here. But stick with it: once you're done with this step, you'll be ready to get to the machine learning parts of the project!

Step 1.2.1: Re-Encode Categorical Features

For categorical data, you would ordinarily need to encode the levels as dummy variables. Depending on the number of categories, perform one of the following:

Discussion 1.2.1: Re-Encode Categorical Features

I kept all remaining features from the column-dropping step for now. All multi-level categorical features were one-hot encoded and OST_WEST_KZ was re-encoded as a numerical column.

I may consider dropping them when tuning the clustering model if that improves its performance on new data.
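
A condensed sketch of the re-encoding just described; it assumes feat_info labels categorical columns with the type 'categorical', and that OST_WEST_KZ takes the values 'O' and 'W':

```python
# Categorical features still present after the column drops.
cat_cols = [c for c in feat_info.loc[feat_info['type'] == 'categorical',
                                     'attribute']
            if c in azdias_few.columns]

# OST_WEST_KZ is the non-numeric binary feature: re-encode it as 0/1.
azdias_few['OST_WEST_KZ'] = azdias_few['OST_WEST_KZ'].map({'O': 0, 'W': 1})

# One-hot encode the multi-level categorical features.
multi_level = [c for c in cat_cols
               if c != 'OST_WEST_KZ' and azdias_few[c].nunique(dropna=True) > 2]
azdias_few = pd.get_dummies(azdias_few, columns=multi_level)
```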

Step 1.2.2: Engineer Mixed-Type Features

There are a handful of features that are marked as "mixed" in the feature summary that require special treatment in order to be included in the analysis. There are two in particular that deserve attention; the handling of the rest is up to your own choices:

Be sure to check Data_Dictionary.md for the details needed to finish these tasks.
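
One common alternative treatment, shown here only as a sketch (the discussion below opts to one-hot encode these features instead): per Data_Dictionary.md, the two digits of CAMEO_INTL_2015 encode household wealth (tens) and life stage (ones), so they can be split into two separate features.

```python
# Split CAMEO_INTL_2015 into its two digits: wealth (tens) and
# life stage (ones), per Data_Dictionary.md. Non-numeric codes become NaN.
cameo = pd.to_numeric(azdias_few['CAMEO_INTL_2015'], errors='coerce')
azdias_few['WEALTH'] = cameo // 10
azdias_few['LIFE_STAGE'] = cameo % 10

# PRAEGENDE_JUGENDJAHRE can be split analogously into a decade variable
# and a mainstream/avantgarde flag, using mapping dictionaries built from
# the data dictionary.
```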

Discussion 1.2.2: Engineer Mixed-Type Features

I decided to keep WOHNLAGE and PLZ8_BAUMAX without any further modification because each can be more or less encapsulated in a single dimension.

I also dropped LP_LEBENSPHASE_GROB because LP_LEBENSPHASE_FEIN encodes the same life-stage information at a finer grain.

What remained of these three, as well as PRAEGENDE_JUGENDJAHRE and CAMEO_INTL_2015, was one-hot encoded.

Step 1.2.3: Complete Feature Selection

In order to finish this step up, you need to make sure that your data frame now only has the columns that you want to keep. To summarize, the dataframe should consist of the following:

Make sure that for any new columns that you have engineered, you've excluded the original columns from the final dataset. Otherwise, their values will interfere with the analysis later in the project. For example, you should not keep "PRAEGENDE_JUGENDJAHRE", since its raw values won't be useful for the algorithm: only the values derived from it in the engineered features you created should be retained. As a reminder, your data should only be from the subset with few or no missing values.

All re-engineering and column removal has already been performed.

Step 1.3: Create a Cleaning Function

Even though you've finished cleaning up the general population demographics data, it's important to look ahead to the future and realize that you'll need to perform the same cleaning steps on the customer demographics data. In this substep, complete the function below to execute the main feature selection, encoding, and re-engineering steps you performed above. Then, when it comes to looking at the customer data in Step 3, you can just run this function on that DataFrame to get the trimmed dataset in a single step.
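
A condensed sketch of such a function, assuming the helpers and lists defined in the steps above (parse_codes, drop_cols, multi_level) are in scope; the exact body should mirror whatever choices were made in Step 1:

```python
def clean_data(df):
    """Re-apply the Step 1 cleaning to a new demographics DataFrame."""
    df = df.copy()

    # 1. Convert coded missing/unknown values to NaN (Step 1.1.1).
    for attr, codes in zip(feat_info['attribute'],
                           feat_info['missing_or_unknown'].map(parse_codes)):
        if codes and attr in df.columns:
            df[attr] = df[attr].replace(codes, np.nan)

    # 2. Drop the outlier columns identified in Step 1.1.2.
    df = df.drop(columns=[c for c in drop_cols if c in df.columns])

    # 3. Keep only rows with few missing values (Step 1.1.3 cutoff).
    df = df[df.isnull().sum(axis=1) <= 20].copy()

    # 4. Re-encode and one-hot encode features (Step 1.2).
    df['OST_WEST_KZ'] = df['OST_WEST_KZ'].map({'O': 0, 'W': 1})
    df = pd.get_dummies(df, columns=[c for c in multi_level if c in df.columns])

    return df
```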

Step 2: Feature Transformation

Step 2.1: Apply Feature Scaling

Before we apply dimensionality reduction techniques to the data, we need to perform feature scaling so that the principal component vectors are not influenced by the natural differences in scale for features. Starting from this part of the project, you'll want to keep an eye on the API reference page for sklearn to help you navigate to all of the classes and functions that you'll need. In this substep, you'll need to check the following:

I think it would be best to temporarily remove the missing values. Replacing all missing values first would add too much complication to scaling: I would likely end up having to replace them in a way that does not affect the feature scaling, such as using the column mean, or making them blatant outliers so they are omitted as such. It's better to create a temporary variable holding only the rows with no missing values, fit the scaler on that, and then apply the fitted scaler to transform the original.

Discussion 2.1: Apply Feature Scaling

As stated above, I stored all rows with no missing values into a temporary variable, fit the scaler to that, and used the result to transform the original dataframe. I also used imputation to replace all missing values with the mean of the column in which they occur.
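
A sketch of the combined approach described above; azdias_clean is a placeholder name for the fully cleaned and encoded dataframe from Step 1:

```python
from sklearn.impute import SimpleImputer
from sklearn.preprocessing import StandardScaler

# Fit the scaler only on rows with no missing values...
scaler = StandardScaler()
scaler.fit(azdias_clean.dropna())

# ...impute remaining NaNs with the mean of their column...
imputer = SimpleImputer(strategy='mean')
azdias_imputed = pd.DataFrame(imputer.fit_transform(azdias_clean),
                              columns=azdias_clean.columns)

# ...then apply the already-fitted scaler to the full dataset.
azdias_scaled = scaler.transform(azdias_imputed)
```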

Step 2.2: Perform Dimensionality Reduction

On your scaled data, you are now ready to apply dimensionality reduction techniques.
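
A sketch of the usual workflow: fit PCA once to inspect the explained variance curve, then refit keeping the chosen number of components (125, per the discussion below):

```python
from sklearn.decomposition import PCA

# First pass: all components, to see the variance trade-off.
pca = PCA()
pca.fit(azdias_scaled)

plt.plot(np.cumsum(pca.explained_variance_ratio_))
plt.xlabel('Number of components')
plt.ylabel('Cumulative explained variance')
plt.show()

# Second pass: keep the number of components chosen from the plot.
pca = PCA(n_components=125)
azdias_pca = pca.fit_transform(azdias_scaled)
```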

Discussion 2.2: Perform Dimensionality Reduction


While I did store three differently-reduced datasets in three different variables, the one I decided to stick with has 125 components. According to the explained variance ratio plot above, 125 components is where the returns on additional variance coverage start to diminish sharply, and it covers about 99% of the variance.

Step 2.3: Interpret Principal Components

Now that we have our transformed principal components, it's a nice idea to check out the weight of each variable on the first few components to see if they can be interpreted in some fashion.

As a reminder, each principal component is a unit vector that points in the direction of highest variance (after accounting for the variance captured by earlier principal components). The further a weight is from zero, the more the principal component is in the direction of the corresponding feature. If two features have large weights of the same sign (both positive or both negative), then increases in one can be expected to be associated with increases in the other. In contrast, features with weights of opposite signs can be expected to show a negative correlation: increases in one variable should result in a decrease in the other.
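
A small helper (the name is my own) for pulling out the most strongly weighted features on a given component:

```python
def top_weights(pca, feature_names, component, n=5):
    """Return the n most positive and n most negative weights on a component."""
    weights = pd.Series(pca.components_[component], index=feature_names)
    return pd.concat([weights.nlargest(n), weights.nsmallest(n)])

# Example: inspect the first principal component.
print(top_weights(pca, azdias_clean.columns, component=0))
```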

Discussion 2.3: Interpret Principal Components

I am honestly surprised that WOHNLAGE_0.0 has such a high weight. It makes me a little suspicious, but I think I'll just have to see how well the model performs instead of jumping to conclusions.

It's also interesting that the top three positive and negative features in the third principal component actually have higher-magnitude weights on their component than any of the features plotted for the second. None of the features in the second component's plot stand out much, which again almost feels too neat to be true.

The third component was a bit better than the second, even if the magnitudes of the weights are still relatively close together. Estimated age, financial planning, and "fair-supplied" energy consumption stand out among the positively-weighted features, while all of the negatively-weighted features (namely active saving, "inconspicuous financial behaviour," investing, religiosity, traditionality, and dutifulness) have almost the same magnitude (around 0.2) as those three.

Step 3: Clustering

Step 3.1: Apply Clustering to General Population

You've assessed and cleaned the demographics data, then scaled and transformed them. Now, it's time to see how the data clusters in the principal components space. In this substep, you will apply k-means clustering to the dataset and use the average within-cluster distances from each point to their assigned cluster's centroid to decide on a number of clusters to keep.
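
A sketch of the elbow search described above; KMeans.score() returns the negative sum of squared distances to the centroids, so negating and dividing by the number of points gives the average (squared) within-cluster distance. The random_state is an arbitrary choice for reproducibility:

```python
from sklearn.cluster import KMeans

# Average squared distance to centroid over a range of cluster counts.
ks = range(2, 31)
scores = []
for k in ks:
    km = KMeans(n_clusters=k, random_state=42).fit(azdias_pca)
    scores.append(-km.score(azdias_pca) / len(azdias_pca))

plt.plot(list(ks), scores, marker='o')
plt.xlabel('Number of clusters k')
plt.ylabel('Average squared distance to centroid')
plt.show()

# Final model with the cluster count chosen below.
kmeans = KMeans(n_clusters=30, random_state=42).fit(azdias_pca)
population_labels = kmeans.predict(azdias_pca)
```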

Discussion 3.1: Apply Clustering to General Population

I decided to use 30 clusters to make the clusters as tight as I could without using more clusters than advised.

Step 3.2: Apply All Steps to the Customer Data

Now that you have clusters and cluster centers for the general population, it's time to see how the customer data maps on to those clusters. Take care to not confuse this for re-fitting all of the models to the customer data. Instead, you're going to use the fits from the general population to clean, transform, and cluster the customer data. In the last step of the project, you will interpret how the general population fits apply to the customer data.
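
A sketch of running the customer data through the already-fitted pipeline; the file name is an assumption, so use the path provided with the project:

```python
# Load and clean the customer data with the Step 1.3 function, then apply
# the fitted imputer, scaler, PCA, and k-means models -- no re-fitting.
customers = pd.read_csv('Udacity_CUSTOMERS_Subset.csv', sep=';')
customers_clean = clean_data(customers)

# Align columns with the general population data in case some dummy
# categories are absent from the customer subset.
customers_clean = customers_clean.reindex(columns=azdias_clean.columns,
                                          fill_value=0)

customers_scaled = scaler.transform(imputer.transform(customers_clean))
customers_pca = pca.transform(customers_scaled)
customer_labels = kmeans.predict(customers_pca)
```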

Step 3.3: Compare Customer Data to Demographics Data

At this point, you have clustered data based on demographics of the general population of Germany, and seen how the customer data for a mail-order sales company maps onto those demographic clusters. In this final substep, you will compare the two cluster distributions to see where the strongest customer base for the company is.

Consider the proportion of persons in each cluster for the general population, and the proportions for the customers. If the company's customer base were universal, then the cluster assignment proportions should be fairly similar between the two. If only particular segments of the population are interested in the company's products, then we should see a mismatch from one to the other. If a cluster has a higher proportion of persons in the customer data than in the general population (e.g. 5% of persons are assigned to a cluster for the general population, but 15% of the customer data is closest to that cluster's centroid), that suggests the people in that cluster are a target audience for the company. On the other hand, if a cluster's proportion is larger in the general population than in the customer data (e.g. only 2% of customers are closest to a population centroid that captures 6% of the data), that suggests that group of persons is outside of the target demographics.
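
A sketch of that comparison, using the cluster labels produced in the previous steps:

```python
# Proportion of points assigned to each cluster, for each dataset.
pop_props = pd.Series(population_labels).value_counts(normalize=True).sort_index()
cust_props = pd.Series(customer_labels).value_counts(normalize=True).sort_index()

comparison = pd.DataFrame({'population': pop_props,
                           'customers': cust_props}).fillna(0)
comparison['difference'] = comparison['customers'] - comparison['population']

# Clusters sorted from most over- to most under-represented among customers.
print(comparison.sort_values('difference', ascending=False))
```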

Take a look at the following points in this step:

Centroids 1, 3, 17, and 25 have the largest discrepancies in representation by a noticeable margin, so I will focus on those four in particular.

Significant Discrepancies Between Centroid 6 and the Rest (Unordered):

Significant Discrepancies with Centroid 17:

Discussion 3.3: Compare Customer Data to Demographics Data


I focused on Centroids 1, 3, 17, and 25. 17 is overrepresented in the customer subset, while the other three are overrepresented in the general population compared to the subset. Below are major feature differences that this set of centroids brought to light:

In summary, this sample of clusters suggests that one likely customer segment will tend to have a noticeably higher inclination towards the following traits than the average person:

Congratulations on making it this far in the project! Before you finish, make sure to check through the entire notebook from top to bottom to make sure that your analysis follows a logical flow and all of your findings are documented in Discussion cells. Once you've checked over all of your work, you should export the notebook as an HTML document to submit for evaluation. You can do this from the menu, navigating to File -> Download as -> HTML (.html). You will submit both that document and this notebook for your project submission.